
    Comparison of hands-free speech-based navigation techniques for virtual reality training

    When it comes to Virtual Reality (VR) training, the depicted scenarios can be characterized by a high level of complexity and a large spatial extent. Speech-based interaction techniques can provide an intuitive, natural and effective way to navigate large Virtual Environments (VEs) without the need for handheld controllers, which may impair the execution of manual tasks or prevent the use of wearable haptic devices. In this study, three hands-free speech-based navigation techniques for VR are compared: a speech-only technique, a speech-with-gaze variant (gaze to point to the destination, speech as trigger), and a combination of the first two. The techniques were deployed in a large VE representing a common industrial setting (a hangar), and a within-subjects user study was carried out to assess their usability and performance.
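The speech-with-gaze variant described above can be illustrated with a minimal sketch, assuming a simple ray-cast of the gaze direction onto a flat floor plane and a single trigger word; function names and the trigger word are hypothetical, not taken from the paper.

```python
# Minimal sketch of a speech-with-gaze navigation trigger (illustrative,
# not the paper's implementation): the gaze ray selects a destination on
# the floor plane, and a recognized speech command confirms the teleport.

def intersect_floor(origin, direction, floor_y=0.0):
    """Return the point where the gaze ray hits the floor plane, or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dy) < 1e-6:          # ray parallel to the floor
        return None
    t = (floor_y - oy) / dy
    if t <= 0:                  # floor intersection is behind the viewer
        return None
    return (ox + t * dx, floor_y, oz + t * dz)

def navigate(player_pos, gaze_origin, gaze_dir, speech_command):
    """Teleport the player only when the trigger word is spoken."""
    if speech_command.strip().lower() != "go":
        return player_pos
    target = intersect_floor(gaze_origin, gaze_dir)
    return target if target is not None else player_pos

# Viewer at eye height 1.7 m, gazing down-and-forward, says "go"
pos = navigate((0, 0, 0), (0, 1.7, 0), (0, -0.5, 1.0), "go")
print(pos)
```

Decoupling destination selection (gaze) from the trigger (speech) is what keeps the hands free while avoiding accidental teleports on unrelated utterances.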

    AR-MoCap: Using augmented reality to support motion capture acting

    Technology is disrupting the way films involving visual effects are produced. Chroma-key, LED walls, motion capture (mocap), 3D visual storyboards, and simulcams are only a few examples of the many changes introduced in the cinema industry in recent years. Although these technologies are becoming commonplace, they present new, unexplored challenges to the actors. In particular, when mocap is used to record the actors’ movements with the aim of animating digital character models, an increase in the workload can easily be expected for people on stage. In fact, actors have to rely largely on their imagination to understand what the digitally created characters will actually be seeing and feeling. This paper focuses on this specific domain, and aims to demonstrate how Augmented Reality (AR) can be helpful for actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that can be used by actors for rehearsing the scene in AR on the real set before actually shooting it. Through an Optical See-Through Head-Mounted Display (OST-HMD), an actor can see, e.g., the digital characters of other actors wearing mocap suits overlapped in real-time to their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system can help the actors to position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.

    User perception of robot's role in floor projection-based Mixed-Reality robotic games

    Within the emerging research area of robotic gaming, and specifically in application domains in which the recent literature suggests combining commercial off-the-shelf (COTS) robots with projected mixed reality (MR) technology to develop engaging games, a crucial issue to consider in the design process is how to make the player perceive the robot as having a key role, i.e., how to valorize its presence from the user experience point of view. Moving from this consideration, this paper reports ongoing efforts aimed at investigating the impact of diverse game design choices from the above perspective, while at the same time extracting preliminary insights that can be exploited to orient further research in the field of MR-based robotic gaming and related scenarios.

    Immersive Movies: The Effect of Point of View on Narrative Engagement

    Cinematic Virtual Reality (CVR) offers filmmakers a wide range of possibilities to explore new techniques regarding movie scripting, shooting and editing. Despite the many experiments performed so far with both live action and computer-generated movies, only a few studies have focused on analyzing how these cinematic techniques actually affect the viewers’ experience. As in traditional cinema, a key step for CVR screenwriters and directors is to choose from which perspective the viewers will see the scene, the so-called point of view (POV). The aim of this paper is to understand to what extent watching an immersive movie from a specific POV can impact narrative engagement (NE), i.e., the viewers’ sensation of being immersed in the movie environment and being connected with its characters and story. Two POVs that are typically used in CVR, i.e., first-person perspective (1-PP) and external perspective (EP), are investigated through a user study in which both objective and subjective metrics were collected. The user study was carried out by leveraging two live action 360° short films with distinct scripts. The results suggest that the 1-PP experience could be more pleasant than the EP one in terms of overall NE and narrative presence, or even for all the NE dimensions if the potential of that POV is specifically exploited.

    Comparing state-of-the-art and emerging augmented reality interfaces for autonomous vehicle-to-pedestrian communication

    In the last few years, a considerable literature has grown around the theme of how to provide pedestrians and other vulnerable road users (VRUs) with a clear indication of a fully autonomous vehicle (FAV)'s status and intentions, which is crucial to make FAVs and VRUs coexist. So far, a variety of external interfaces leveraging different paradigms and technologies have been created. Proposed designs include vehicle-mounted devices (like LED panels), short-range on-road projection, and road infrastructure interfaces (e.g., special asphalts with embedded displays). These designs have been evaluated in different settings, using mockups, specially prepared vehicles, or virtual environments, with heterogeneous evaluation metrics. Some promising interfaces based on Augmented Reality (AR) have been proposed too, but their usability and effectiveness have not been tested yet. This paper aims to complement this body of literature by presenting a comparison of state-of-the-art interfaces and new designs under common conditions. To this aim, an immersive Virtual Reality-based simulation was developed, recreating a well-known scenario used in previous works, namely pedestrian crossing in urban environments under non-regulated conditions. A user study was then performed to investigate the various dimensions of vehicle-to-pedestrian interaction in both objective and subjective terms. Results showed that, although no interface clearly stands out over all the considered dimensions, one of the studied AR designs was able to provide state-of-the-art results in terms of safety and trust, at the cost of a higher cognitive effort and lower intuitiveness compared to LED panels showing anthropomorphic features. Together with rankings on the various dimensions, the indications about advantages and drawbacks of the various alternatives that emerged from the study could be an important source of information for future developments in the field.

    An evaluation testbed for locomotion in virtual reality

    A common operation performed in Virtual Reality (VR) environments is locomotion. Although real walking can represent a natural and intuitive way to manage displacements in such environments, its use is generally limited by the size of the area tracked by the VR system (typically, the size of a room) or requires expensive technologies to cover particularly extended settings. A number of approaches have been proposed to enable effective explorations in VR, each characterized by different hardware requirements and costs, and capable of providing different levels of usability and performance. However, the lack of a well-defined methodology for assessing and comparing available approaches makes it difficult to identify, among the various alternatives, the best solutions for selected application domains. To deal with this issue, this paper introduces a novel evaluation testbed which, by building on the outcomes of many separate works reported in the literature, aims to support a comprehensive analysis of the considered design space. An experimental protocol for collecting objective and subjective measures is proposed, together with a scoring system able to rank locomotion approaches based on a weighted set of requirements. Testbed usage is illustrated in a use case requiring the selection of the technique to adopt in a given application scenario.
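The weighted scoring described above can be sketched as a simple weighted sum over per-requirement scores; the technique names, requirements, and values below are purely illustrative, not the paper's actual data or weights.

```python
# Hypothetical sketch of a requirement-weighted ranking of locomotion
# techniques: each technique receives normalized per-requirement scores,
# which are combined with application-specific weights.

def rank_techniques(scores, weights):
    """Return (technique, weighted score) pairs sorted best-first."""
    totals = {
        name: sum(weights[req] * value for req, value in reqs.items())
        for name, reqs in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

scores = {  # normalized 0-1 scores per requirement (illustrative)
    "real walking":  {"usability": 0.9, "performance": 0.8, "cost": 0.2},
    "teleportation": {"usability": 0.7, "performance": 0.9, "cost": 0.9},
    "joystick":      {"usability": 0.5, "performance": 0.6, "cost": 1.0},
}
weights = {"usability": 0.5, "performance": 0.3, "cost": 0.2}  # scenario-dependent

for name, score in rank_techniques(scores, weights):
    print(f"{name}: {score:.2f}")
```

Changing the weight vector to match a different application scenario can reorder the ranking, which is what makes such a scheme useful for domain-specific technique selection.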

    Blockchain and NFTs-based Trades of Second-hand Vehicles

    Recently, the automotive industry has been characterized by disruptive innovations, like self-driving cars or hybrid/electric engines. Despite this, some operations, such as the trade of second-hand vehicles, continue to be carried out in the “traditional” way, in which the buyer has to trust the seller about the state of the vehicle. Several studies highlighted that odometer fraud alone could cost around 8.9 billion euros per year. In order to overcome these limitations, which are related to information asymmetries between buyers and sellers, in this work we propose to exploit blockchain technology to store a vehicle’s previous history in a transparent way. To further explore the advantages of blockchain, we also present how a decentralized second-hand vehicle market – also enabling automatic transfers of ownership upon monetary transfers – can be built by leveraging Non-Fungible Tokens (NFTs). We propose an architecture and a practical implementation of a Decentralized Application (Dapp) and discuss the security of the proposed system, its costs, and future developments.
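The core mechanism, an NFT whose ownership transfers atomically when the agreed payment arrives, together with an append-only vehicle history, can be sketched as follows; this is an illustrative model of the idea, not the paper's actual smart contract or DApp code, and all class and method names are hypothetical.

```python
# Illustrative sketch: a vehicle as an NFT-like token with an append-only
# history record and an atomic buy operation that moves ownership only
# when the payment meets the listed price.

class VehicleNFT:
    def __init__(self, vin, owner):
        self.vin = vin
        self.owner = owner
        self.history = []          # append-only, mimicking on-chain storage
        self.asking_price = None   # None means "not for sale"

    def log_event(self, event):
        """Record mileage updates, repairs, etc.; entries are never edited."""
        self.history.append(event)

    def list_for_sale(self, price):
        self.asking_price = price

    def buy(self, buyer, payment):
        """Atomic trade: ownership moves only if sale conditions are met."""
        if self.asking_price is None or payment < self.asking_price:
            raise ValueError("sale conditions not met")
        seller = self.owner
        self.owner = buyer
        self.log_event(f"sold by {seller} to {buyer} for {payment}")
        self.asking_price = None
        return seller  # recipient of the funds

car = VehicleNFT("WVWZZZ1JZXW000001", owner="alice")
car.log_event("odometer: 42000 km")
car.list_for_sale(9000)
car.buy("bob", payment=9000)
print(car.owner)  # bob
```

Because the history is append-only and the ownership transfer is coupled to the payment in a single operation, a buyer no longer has to trust the seller's claims about the vehicle's state, which is precisely the information asymmetry the work targets.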

    Digital Twins for Industry 4.0 in the 6G Era

    With the Fifth Generation (5G) mobile communication system recently rolled out in many countries, the wireless community is now setting its eyes on the next era of Sixth Generation (6G). Inheriting from 5G its focus on industrial use cases, 6G is envisaged to become the infrastructural backbone of the future intelligent industry. In particular, a combination of 6G and the emerging technology of Digital Twins (DT) will give impetus to the next evolution of Industry 4.0 (I4.0) systems. This article provides a survey of the research area of 6G-empowered industrial DT systems. With a novel vision of a 6G industrial DT ecosystem, this survey discusses the ambitions and potential applications of industrial DT in the 6G era, identifying the emerging challenges as well as the key enabling technologies. The introduced ecosystem is intended to bridge the gaps between humans, machines, and the data infrastructure, and therewith enable numerous novel application scenarios.
    Comment: Accepted for publication in IEEE Open Journal of Vehicular Technology.

    Towards the adoption of virtual reality training systems for the self-tuition of industrial robot operators: A case study at KUKA

    Interest is growing around Virtual Reality training systems (VRTSs), which have started to be considered a credible option to train companies’ workforce. Although the efficacy of VRTSs as an appealing alternative to the traditional learning material used by trainers in their lectures has already been proved, their effectiveness as self-learning tools not requiring human instructors is still controversial, since experiments carried out within established training programmes are still rare. This paper reports the results of a user study aimed at investigating how an immersive VRTS designed to train industrial robot operators on light maintenance tasks actually compares to the training practice, based on in-class and hands-on sessions, that is adopted worldwide by an international company. After the training, study participants were evaluated by a company instructor while performing the taught task autonomously on a real robot. Obtained results showed that the effectiveness of the devised VRTS in making the trainees able to successfully complete the task was comparable to that of the traditional training. It was also found that there is still room for improvement, e.g., on virtually training interaction with physical tools and equipment, as well as on emulating and fortifying aspects of the social dynamics between human trainers and trainees.

    Is learning by teaching an effective approach in mixed-reality robotic training systems?

    In recent years, there has been increasing interest in extended reality training systems (XRTSs), including their expanding integration into industry training programmes. Although pedagogists have developed multiple didactic models with the aim of improving the effectiveness of knowledge transfer, the vast majority of XRTSs still adapt the traditional instruction model. Other approaches, like Learning by Teaching (LBT), have started to be considered, but mainly for other kinds of intelligent training systems, such as those involving service robots. In the presented work, a mixed-reality robotic training system (MRRTS) devised to support LBT is presented. A study involving electronic engineering students was performed with the aim of evaluating the effectiveness of the LBT pedagogical model when applied to a MRRTS, by comparing it with a consolidated approach. Results indicated that while both approaches granted a good knowledge transfer, LBT was far superior in terms of long-term retention of the information, at the cost of a longer time spent in training.